The Implicit Adaptation to Temporal Regularities
Similar Articles
Infants use temporal regularities to chunk objects in memory.
Infants, like adults, can maintain only a few items in working memory, but can overcome this limit by creating more efficient representations, or "chunks." Previous research shows that infants can form chunks using shared features or spatial proximity between objects. Here we asked whether infants also can create chunked representations using regularities that unfold over time. Thirteen-month o...
Children's implicit learning of graphotactic and morphological regularities.
In French, the transcription of the same sound can be guided by both probabilistic graphotactic constraints (e.g., /epsilon t/ is more often transcribed ette after -v than after -f) and morphological constraints (e.g., /epsilon t/ is always transcribed ette when used as a diminutive suffix). Three experiments showed that pseudo-word spellings of 8- to 11-year-old children and adults were influen...
Implicit Learning of Arithmetic Regularities Is Facilitated by Proximal Contrast
Natural number arithmetic is a simple, powerful, and important symbolic system. Despite intense focus on learning in cognitive development and educational research, many adults have weak knowledge of the system. In the current study, participants learn arithmetic principles via an implicit learning paradigm. Participants learn not by solving arithmetic equations, but through viewing and evaluating exa...
Implicit Learning of Stimulus Regularities Increases Cognitive Control
In this study we aim to examine how the implicit learning of statistical regularities of successive stimuli affects the ability to exert cognitive control. In three experiments, sequences of flanker stimuli were segregated into pairs, with the second stimulus contingent on the first. Response times were reliably faster for the second stimulus if its congruence tended to match the congruence of ...
Implicit Temporal Differences
In reinforcement learning, the TD(λ) algorithm is a fundamental policy evaluation method with an efficient online implementation that is suitable for large-scale problems. One practical drawback of TD(λ) is its sensitivity to the choice of the step-size. It is an empirically well-known fact that a large step-size leads to fast convergence, at the cost of higher variance and risk of instability....
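The abstract above refers to the standard TD(λ) policy-evaluation update with eligibility traces. As a minimal illustration of the algorithm it discusses (not the paper's own implicit variant), here is a sketch of tabular TD(λ) with accumulating traces, evaluated on a hypothetical two-step chain; the environment, state count, and hyperparameter values are assumptions for the example:

```python
import numpy as np

def td_lambda(episodes, n_states, alpha=0.1, gamma=0.9, lam=0.8):
    """Tabular TD(lambda) policy evaluation with accumulating eligibility traces.

    Each episode is a list of (state, reward, next_state, done) transitions.
    """
    V = np.zeros(n_states)
    for episode in episodes:
        z = np.zeros(n_states)  # eligibility trace, reset at episode start
        for (s, r, s_next, done) in episode:
            # TD error: bootstrap from V[s_next] unless the episode ended
            target = r + (0.0 if done else gamma * V[s_next])
            delta = target - V[s]
            # Decay all traces, then bump the trace of the visited state
            z *= gamma * lam
            z[s] += 1.0
            # Every state is updated in proportion to its eligibility
            V += alpha * delta * z
    return V

# Hypothetical 3-state chain: 0 -> 1 -> 2 (terminal), reward 1 on the last step.
episode = [(0, 0.0, 1, False), (1, 1.0, 2, True)]
V = td_lambda([episode] * 200, n_states=3)
```

After repeated sweeps, V[1] approaches the terminal reward of 1 and V[0] approaches its discounted return near 0.9, illustrating the step-size/variance trade-off the abstract mentions: a larger `alpha` converges in fewer episodes but with noisier estimates.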
Journal
Journal title: Journal of Vision
Year: 2017
ISSN: 1534-7362
DOI: 10.1167/17.10.750